Adopting AI in Australian Healthcare

I recently spent a couple of days at the Australasian Institute of Digital Health’s “AI.Care” conference in Brisbane. I came away equal parts energised and sobered. Energised, because there’s real momentum: AI is no longer a speculative future in healthcare. It’s already in the room. Sobered, because the biggest barriers to adoption aren’t technical. They’re human, organisational, legal, and cultural.

AI.Care’s framing was practical: move beyond hype, learn from real deployments, and figure out what it takes to go from pilot to scale. The stated themes ranged from adoption and governance to workforce, privacy, security, consumer trust, data sovereignty and the role of national collaboration in Australia’s approach to AI in care. In other words: not “what can the model do?” but “what does it take to make this safe, useful, and sustainable in real health systems?”

Here are the ideas that stuck with me most — and why I think they matter for Australia’s next phase of AI adoption.

1) Adoption is a workflow problem before it’s a model problem

One message came through loudly: the technical performance of a model is rarely the reason an AI deployment succeeds or fails.

Healthcare is high-pressure, time-poor, and risk-intolerant. Workflows are complex, full of exceptions, and stitched together across teams, systems, and handoffs. If you introduce AI without understanding where the real friction sits, you end up building a shiny solution that nobody uses — or worse, something that quietly makes work harder.

The most useful lens I heard was deceptively simple:

  • Slow down and find the real pain point

  • Map the workflow

  • Quantify what matters (time saved, clinical impact, operational impact, opportunity cost; there’s a back-of-the-envelope sketch at the end of this section)

  • Design around dependencies, not wishful thinking

This is the opposite of “we bought a tool — now find a use case.”
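
To make “quantify what matters” concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is invented; the point is the shape of the calculation: net time saved has to account for review overhead and for how often the output is actually usable.

```python
# Fabricated numbers, purely to show the shape of the calculation.
mins_saved_per_note = 4.0        # assumed drafting time saved per clinical note
review_mins_per_note = 1.5       # assumed time spent checking/correcting each draft
usable_fraction = 0.85           # assumed share of drafts good enough to keep
notes_per_clinician_day = 20
clinicians = 50

net_mins_per_note = (mins_saved_per_note * usable_fraction) - review_mins_per_note
hours_per_day = net_mins_per_note * notes_per_clinician_day * clinicians / 60
print(f"Net clinician hours saved per day: {hours_per_day:.0f}")
# A negative result here is a real finding: the tool makes work harder.
```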

2) Governance isn’t paperwork — it’s the operating system

AI governance can sound like a compliance afterthought, but at AI.Care it was treated more like core infrastructure. A few themes repeated across sessions:

  • You need board and executive-level visibility for material AI risk.

  • You need embedded governance that lifts success rates rather than slowing things down.

  • You need clinical safety plans for each deployment — not just a generic framework.

  • You need clarity on whether a tool is functionally acting like a medical device (and what that implies).

  • You need documentation and auditability — because “trust me” isn’t a strategy.

The really important shift is this: governance isn’t a policy document. It’s a set of routines, gates, and accountabilities that make “safe AI” the default way of working.
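
To show what I mean by routines and gates, here’s a toy Python sketch of a deployment checklist where release is blocked until every accountability is signed off. The gate names and owner roles are invented for illustration; this is a sketch of the idea, not a template or a standard.

```python
# Illustrative only: "governance as gates" expressed in code. The gate
# names and owners below are invented for this sketch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Gate:
    name: str
    owner: str                   # a named accountable person, not "the team"
    passed: bool = False
    evidence: str = ""           # link to the artefact that proves it
    signed_off: date | None = None

@dataclass
class DeploymentGates:
    gates: list[Gate] = field(default_factory=lambda: [
        Gate("clinical_safety_plan", owner="Clinical Safety Officer"),
        Gate("privacy_impact_assessment", owner="Privacy Officer"),
        Gate("device_classification_review", owner="Regulatory Lead"),
        Gate("monitoring_and_rollback_plan", owner="Product Owner"),
    ])

    def ready_to_deploy(self) -> bool:
        # "Safe AI by default": a missing sign-off is a hard stop.
        return all(g.passed and g.signed_off is not None for g in self.gates)
```

The specifics matter far less than the two properties the sketch encodes: deployment is blocked by default, and every gate leaves an owner and an audit trail.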

3) Trust, privacy and sovereignty are adoption constraints — not footnotes

A lot of AI discussion still treats privacy and sovereignty as secondary issues: things to “handle later”.

But the reality is that in healthcare, trust is the product.

Two concerns felt particularly Australian:

  • Re-identification risk is real: Even when datasets are “de-identified”, cross-matching can re-identify individuals. It’s not theoretical — it’s a predictable outcome of combining rich datasets (the sketch after this list shows the mechanism). If governance assumes otherwise, you’re building on sand.

  • Data sovereignty is not just legal — it’s cultural: Questions of data ownership and control aren’t abstract when it comes to health data. The stakes include cultural considerations, community trust, and the very real fear that value is extracted from Australian data by offshore corporations while Australians lose control over how it’s used.
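
Here’s the sketch promised above: a minimal linkage attack in Python. The datasets and values are entirely fabricated, but joining on shared quasi-identifiers (postcode, birth year, sex) is exactly how re-identification happens in practice.

```python
# Illustrative only: tiny, fabricated datasets showing how quasi-identifiers
# can link a "de-identified" health record back to a named individual.
import pandas as pd

# A "de-identified" extract: direct identifiers removed, quasi-identifiers kept.
health = pd.DataFrame({
    "postcode":   ["4000", "4000", "4101"],
    "birth_year": [1958, 1983, 1958],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["type 2 diabetes", "asthma", "depression"],
})

# A separate public dataset (say, a club membership list) with names attached.
public = pd.DataFrame({
    "name":       ["A. Citizen", "B. Resident"],
    "postcode":   ["4000", "4101"],
    "birth_year": [1958, 1958],
    "sex":        ["F", "F"],
})

# Cross-matching on quasi-identifiers re-attaches names to diagnoses.
linked = health.merge(public, on=["postcode", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
# Where a (postcode, birth_year, sex) combination is unique in both
# datasets, the match is a re-identification. No hacking required.
```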

Put bluntly: if you can’t explain to people how their data is being used — and why that’s safe and worthwhile — you will not scale AI in a way that lasts.

4) The rise of consumer AI changes the landscape overnight

Another recurring theme: consumers are already using AI for health advice. Whether we like it or not, “AI-assisted health decision-making” is happening in the wild.

That has a few immediate implications:

  • It may widen inequities, because some people have access to tools, literacy, and confidence — and others don’t.

  • It blurs the boundary between “information” and “health advice”.

  • It increases pressure on health services to respond to patients arriving with AI-generated suggestions, interpretations, and fears.

This isn’t a reason to panic — but it is a reason to treat consumer AI as part of the healthcare context, not a separate phenomenon.

5) Evidence has to be local, and assurance has to be continuous

A key point for adoption is that models don’t just need to work in principle — they need to work here, in context, across the populations we serve, within the constraints of our systems.

Two things stood out:

  • “Under-representation” in datasets is a safety risk, not a statistical footnote (the toy example after this list shows why).

  • “Set and forget” doesn’t work: systems need monitoring for drift, unintended harm, and real-world performance.
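
Here’s the toy example promised above. Everything in it is fabricated, but the pattern is real: a headline accuracy figure can look healthy while an under-represented group is being badly served.

```python
# Illustrative only: a healthy overall metric hiding subgroup failure.
# All data here is fabricated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_urban, n_remote = 900, 100   # remote patients under-represented 9:1

df = pd.DataFrame({
    "group":  ["urban"] * n_urban + ["remote"] * n_remote,
    "y_true": rng.integers(0, 2, n_urban + n_remote),
})
# Simulate a model that is accurate for the majority group only.
is_correct = np.where(df["group"] == "urban",
                      rng.random(len(df)) < 0.92,
                      rng.random(len(df)) < 0.55)
df["y_pred"] = np.where(is_correct, df["y_true"], 1 - df["y_true"])
df["correct"] = df["y_true"] == df["y_pred"]

print("Overall accuracy:", round(df["correct"].mean(), 3))   # looks fine, ~0.88
print(df.groupby("group")["correct"].mean().round(3))        # remote: ~0.55
```

Local validation means looking at the second number, for the subgroups that matter here, before anyone relies on the first.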

This is especially relevant where tools update frequently. Regulation and assurance processes need to handle algorithm changes without treating every update like a full reset — but also without letting updates happen silently.
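
On the monitoring side, here’s a minimal sketch of one common drift check, the Population Stability Index (PSI). The data and thresholds are illustrative rather than any clinical standard, but it shows the kind of routine, automated comparison that continuous assurance implies.

```python
# A minimal PSI sketch. Data and thresholds are illustrative only.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """How far has the live distribution drifted from the one the model
    was validated on? (Assumes roughly continuous values.)"""
    # Bin edges come from the reference data, so both samples are
    # measured against the same yardstick.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)
    ref_pct = np.clip(ref_counts / len(reference), 1e-6, None)
    cur_pct = np.clip(cur_counts / len(current), 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Fabricated example: model scores at validation time vs. in production.
rng = np.random.default_rng(0)
validation_scores = rng.normal(0.40, 0.10, 5_000)
live_scores = rng.normal(0.50, 0.12, 5_000)   # the population has shifted

# A widely used (non-clinical) rule of thumb:
# < 0.1 stable, 0.1-0.25 worth watching, > 0.25 investigate.
print(f"PSI = {psi(validation_scores, live_scores):.3f}")
```

Run the same check before and after a vendor pushes a model update, and you get visibility into algorithm changes without treating every update as a full reset.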

6) Procurement is where a lot of AI risk hides

One of the most practical sessions was essentially a reminder that many AI risks are contractual, not technical.

If you don’t lock down the basics, you can end up with:

  • vague data rights

  • unclear IP ownership

  • limited auditability

  • weak vendor assurances

  • poor portability/exit pathways

  • and a tool that becomes embedded before anyone has asked “what happens if we need to change?”

In other words: if governance is the operating system, procurement is the supply chain, and much of the risk enters through it.

Australia needs more “default patterns” here — minimum clauses, standard due diligence checklists, and shared approaches across organisations so everyone doesn’t reinvent the wheel.

7) The pilot-to-production gap is where most things die

The most honest part of the conference was the repeated recognition that many organisations can do pilots. Fewer can do production deployments. Very few can scale.

Why? Because production isn’t just “the pilot, but bigger”. Production needs:

  • clear accountability

  • monitoring and incident response

  • security and privacy baked in

  • clinical safety plans

  • workforce training and upskilling

  • and governance that handles updates and drift

It’s less “innovation” and more “industrialisation”.

So what does this mean for AI adoption in Australia?

If I had to boil down AI.Care Brisbane into a few adoption imperatives for Australia, they’d be these:

  1. Make workflow the unit of change, not the model.

  2. Build governance as an operating system (accountabilities + gates + safety plans).

  3. Treat trust, privacy, and sovereignty as core adoption constraints.

  4. Design for equity from the start, including local validation and subgroup testing.

  5. Keep professional responsibility front and centre: clinicians and services remain accountable.

  6. Standardise procurement guardrails so we stop repeating avoidable mistakes.

  7. Plan for scaling on day one, not after the pilot “works”.

AI is coming into care systems whether we feel ready or not. The organisations that do best won’t be the ones with the flashiest models — they’ll be the ones that can reliably turn promising tools into safe, trusted practice.

And that, in a nutshell, is what AI.Care felt like: less about the “future of AI”, and more about the hard (but solvable) work of making AI real in the Australian context.